

posted by hubie on Wednesday April 15, @06:37PM   Printer-friendly

The scheme follows a string of security failures at SK Telecom, KT, and LG Uplus:

South Korea's Ministry of Science and ICT said on Thursday that SK Telecom, KT, and LG Uplus — the country’s three major carriers — will provide more than seven million mobile subscribers with unmetered 400 Kbps data once their monthly allowances run out. The program was first floated as part of a broader package of consumer-protection measures assembled in parallel with the ministry's response to spiking memory and PC component prices. Deputy Prime Minister and Minister for Science and ICT Bae Kyung-hoon announced it as one of many new obligations imposed on the three carriers following a string of security failures over the past year, calling unlimited, universal access one of the “basic telecommunications rights” that operators are expected to fund themselves.

400 Kbps might not sound like much, especially given that 5G can reach peak speeds in excess of 1 Gbps and standard-definition video streaming requires speeds of around 5 Mbps as a baseline, but it’s more than enough for very rudimentary activities like messaging and VoIP audio, or two-factor authentication.
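
To put those rates in perspective, here is a back-of-the-envelope sketch in C; the 24 kbps VoIP bitrate and the 2 MB photo size are illustrative assumptions, not figures from the announcement.

    #include <stdio.h>

    int main(void)
    {
        /* Rates from the summary: the 400 Kbps fallback vs. the ~5 Mbps
           quoted as a baseline for SD video. The VoIP bitrate and photo
           size below are illustrative assumptions. */
        const double fallback_kbps = 400.0;
        const double sd_video_kbps = 5000.0;
        const double voip_kbps     = 24.0;    /* typical narrow-band voice codec */
        const double photo_kbytes  = 2000.0;  /* a ~2 MB photo attachment */

        printf("SD video needs %.1fx the fallback rate\n",
               sd_video_kbps / fallback_kbps);
        printf("VoIP audio uses roughly %.0f%% of the fallback rate\n",
               100.0 * voip_kbps / fallback_kbps);
        printf("A %.0f KB photo takes about %.0f seconds at 400 Kbps\n",
               photo_kbytes, photo_kbytes * 8.0 / fallback_kbps);
        return 0;
    }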

It’s worth noting that the fallback to 400 Kbps only applies once a customer burns through their paid monthly cap, replacing the hard cutoff or overage charges that previously kicked in on affected plans.

Alongside the obligation to provide unmetered 400 Kbps access, the three operators have committed to increasing data and calling allowances for seniors, upgrading Wi-Fi services on public transport, and introducing 5G plans priced at $13.50 or below. Bae also pushed the carriers to direct more capital toward network buildout for AI workloads.

"Having gone through last year's hacking incidents, the weight of the telecom companies' responsibilities and roles has become even clearer," Bae said in a press release, emphasizing, “We have now reached a point where we must move beyond pledges not to repeat past mistakes and respond with renewal and contribution at a level of complete transformation that the public can tangibly feel." He went on to say that it’s important for the government to contribute to people’s livelihoods, including by guaranteeing what he called “basic telecommunications rights” for all citizens.

Each of the three network operators has been hit by a significant security incident in recent months. SK Telecom suffered a large-scale subscriber data leak, whereas KT was found to have deliberately pushed malware to roughly 600,000 of its own subscribers who were using a third-party BitTorrent-based file-sharing service, resulting in missing files and disabled PCs.


Original Submission

posted by jelizondo on Wednesday April 15, @01:52PM   Printer-friendly

https://phys.org/news/2026-04-electrode-technology-efficiency-plastic-precursors.html

In the process of converting carbon dioxide into useful chemicals such as ethylene—a key precursor for plastics—a major challenge has been the flooding of electrodes, where electrolyte penetrates the electrode structure and reduces performance. KAIST researchers have developed a new electrode design that blocks water while maintaining efficient electrical conduction and catalytic reactions, thereby improving both efficiency and stability.

A research team led by Professor Hyunjoon Song from the Department of Chemistry has developed a novel electrode structure utilizing silver nanowire networks—ultrafine silver wires arranged like a spiderweb—to significantly enhance the efficiency of electrochemical CO₂ conversion to useful chemical products. The research was published in Advanced Science.

In electrochemical CO₂ conversion processes, a long-standing issue has been flooding, where the electrode becomes saturated with electrolyte, reducing the space available for CO₂ to react. While hydrophobic materials can prevent water intrusion, they typically suffer from low electrical conductivity, requiring additional components and complicating the system.

To overcome this, the research team designed a three-layer electrode architecture that simultaneously repels water and enables efficient charge transport. The structure consists of a hydrophobic substrate, a catalyst layer, and an overlaid silver nanowire (Ag NW) network, which acts as an efficient current collector while preventing electrolyte flooding.

A key finding of this study is that the silver nanowires do more than just conduct electricity—they actively participate in the chemical reaction. During CO₂ reduction, the silver nanowires generate carbon monoxide (CO), which is then transferred to adjacent copper-based catalysts, where further reactions occur.

This creates a tandem catalytic system, in which two catalysts cooperate sequentially, significantly enhancing the production of multi-carbon compounds such as ethylene.

The electrode demonstrated outstanding performance. It achieved 79% selectivity toward C₂₊ products in alkaline electrolytes and 86% selectivity in neutral electrolytes, representing a world-leading level. It also maintained stable operation for more than 50 hours without performance degradation.

These results indicate that most of the converted products are the desired chemicals, while also overcoming the durability limitations of conventional systems.

Professor Hyunjoon Song stated, "This study is significant in showing that silver nanowires not only serve as electrical conductors but also directly participate in chemical reactions," adding, "This approach provides a new design strategy that can be extended to converting CO₂ into a wide range of valuable products such as ethanol and fuels."

Provided by The Korea Advanced Institute of Science and Technology (KAIST)

Jonghyeok Park et al, Overlaid Conductive Silver Nanowire Networks on Gas Diffusion Electrodes for High-Performance Electrochemical CO2-to-C2+ Conversion, Advanced Science (2026). DOI: 10.1002/advs.75003

Journal information: Advanced Science


Original Submission

posted by jelizondo on Wednesday April 15, @09:07AM   Printer-friendly
from the the-sheriff-is-in-town dept.

https://www.tomshardware.com/software/linux/linux-lays-down-the-law-on-ai-generated-code-yes-to-copilot-no-to-ai-slop-and-humans-take-the-fall-for-mistakes-after-months-of-fierce-debate-torvalds-and-maintainers-come-to-an-agreement

GZDoom, the over-20-year-old 3D-accelerated source port of Doom, has been relegated to "Historical" status now after a battle over AI-generated code last year.

Legal headaches aside, project maintainers have also been fighting a losing battle against sheer volume. The open-source world is currently drowning in what the community has dubbed "AI slop." The creator of cURL had to close bug bounties after being flooded with hallucinated code, whiteboard tool tldraw began auto-closing external PRs in self-defense, and projects like Node.js and OCaml have seen massive, >10,000-line AI-generated patches spark existential debates among maintainers.

The cultural friction of undisclosed AI code has been even more volatile. Late last year, NVIDIA engineer and kernel maintainer Sasha Levin faced massive community backlash after it was revealed he submitted a patch to kernel 6.15 entirely written by an LLM without disclosing it, changelog included. While the code was functional, it included a performance regression despite being reviewed and tested. The community pushed back hard against the idea of developers slapping their names on complex code they didn't actually write, and even Torvalds admitted the patch was not properly reviewed, partially because it was not labeled as AI-generated.

The GZDoom incident and the Sasha Levin backlash highlight exactly why the Linux kernel's new policy is so vital. Most of the developer community is less angry about the use of AI and more frustrated about the dishonesty surrounding it. By demanding an Assisted-by tag and enforcing strict human liability, the Linux kernel is attempting to strip the emotion out of the debate. Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

The bottom line is, if the code is good, then it's good. If it's hallucinatory AI slop that breaks the kernel, the human who clicked "submit" is the one who will have to answer to Linus Torvalds. In the open-source world, that's about as strong a deterrent as you can get.


Original Submission

posted by jelizondo on Wednesday April 15, @04:22AM   Printer-friendly

The AI Great Leap Forward:

In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is handing down top-down mandates on AI transformation.

Same energy.

The rallying cry of the Great Leap Forward was 超英趕美 — surpass England, catch up to America. Every province, every village, every household was expected to close the gap with industrialized Western nations by sheer force of will. Peasants who had never seen a factory were handed quotas for steel production. If enough people smelted enough iron, China would become an industrial power overnight. Expertise was irrelevant. Conviction was sufficient.

The mandate today is identical, just swap the nouns. Every company, every function, every individual contributor is expected to close the AI gap. Ship AI features. Build agents. Automate workflows. That nobody on the team has ever trained a model, designed an evaluation system, or debugged a retrieval system is beside the point. Conviction is sufficient.

So everyone builds. PMs build AI dashboards. Marketing builds AI content generators. Sales ops builds AI lead scorers. Software engineers are building AI and data solutions that look pixel-perfect and function terribly. The UI is clean. The API is RESTful. The architecture diagram is beautiful. The outputs are wrong. Nobody checks because nobody on the team knows what correct outputs look like. They've never looked at the data. They've never computed a baseline.

Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don't need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn't avoided. It's hidden behind a GUI where nobody with ML expertise will ever look.

The backyard steel of 1958 looked like steel. It was not steel. Today's backyard AI looks like AI. It is not AI. A TypeScript workflow with hardcoded if-else branches is not an agent. A prompt template behind a REST endpoint is not a model. Calling these things AI is like calling pig iron from a backyard furnace high-grade steel. It satisfies the reporting requirement. It fails every real-world test.

But the most dangerous furnace is the one that produces something functional. Teams are building demoware — pretty interfaces, working endpoints, impressive walkthroughs — with zero validation underneath. Some are in-housing SaaS products by vibe coding some frontend with coding agents: it runs, it has a dashboard, it cost a fraction of the vendor. Klarna announced in 2024 that it would replace Salesforce and other SaaS providers with internal AI-built solutions. What these replacements don't have is data infrastructure, error handling, monitoring, on-call support, security patching, or anyone who will maintain them after the builder gets promoted and moves on.

These apps will win awards at the next all-hands. In two years they'll be unmaintainable tech debt some poor soul inherits and rewrites from scratch. The furnace produced pig iron. Someone stamped "steel" on it. Now it's load-bearing.

Meanwhile, the actual product that customers pay for rots in the field. But hey, 超英趕美. The AI adoption dashboard is green.

The full article is an interesting read.


Original Submission

posted by hubie on Tuesday April 14, @11:35PM   Printer-friendly

https://www.politico.com/news/2026/04/13/missouri-city-council-data-center-00867259

Residents of a St. Louis suburb turned out in droves to unseat four incumbents just days after the council approved a development agreement for a $6 billion data center.

Tuesday's election in Festus, Missouri — a city of 12,000 people along the Mississippi River a half-hour south of St. Louis — is the latest example of growing public backlash against cities agreeing to host hyperscale data centers over the objections of residents concerned about their local impacts.


Original Submission

posted by hubie on Tuesday April 14, @06:53PM   Printer-friendly

The same Dave Plummer was the genius behind Windows’ ZIP file support:

Dave Plummer, the engineer behind many of Windows' iconic features like ZIP file support, shared how he built the Task Manager to be so efficient. According to his YouTube video, the current Windows Task Manager is about 4MB, but the original version that he built was just 80K. Plummer’s main concern when he built the Windows utility was that the hardware of the day was so limited, and that a tool used to recover the PC after everything had failed still needed to feel crisp and responsive, even if everything else had hung.

“Every line has a cost; every allocation can leave footprints. Every dependency is a roommate that eats your food and never pays rent,” said Plummer. “And so, when I ended up writing Task Manager, I didn’t approach it like a modern utility where you start with a framework, add nine layers of comfort, six layers of futureproofing, and then act surprised when the thing eats 800MBs and a motivational speech to display just a few numbers.”

One of Plummer’s favorite features of the Task Manager is how it handles startup. Unlike other apps, which simply check whether another instance is already running and activate it if so, this Windows tool goes one step further. It checks that any existing instance is not frozen by sending it a private message and waiting for a reply. A positive response is a sign that the other Task Manager instance is fine and dandy; silence means the existing instance is probably hung too, so a fresh one launches to help get you out of a rut.
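
As a flavor of what that handshake can look like in Win32 terms, here is a minimal C sketch; the window class name is hypothetical, and this is an illustration of the pattern rather than Plummer's actual code.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical window class name published by the existing instance. */
        HWND existing = FindWindowW(L"HypotheticalTaskMgrWindow", NULL);
        if (existing == NULL) {
            puts("No other instance found: start normally.");
            return 0;
        }

        /* Ping the other instance and give it two seconds to answer.
           WM_NULL is a no-op message; a hung window simply never replies. */
        DWORD_PTR reply = 0;
        if (SendMessageTimeoutW(existing, WM_NULL, 0, 0,
                                SMTO_ABORTIFHUNG | SMTO_BLOCK, 2000, &reply)) {
            puts("Existing instance answered: bring it to the front and exit.");
            SetForegroundWindow(existing);
        } else {
            puts("Existing instance is hung: launch a fresh one anyway.");
        }
        return 0;
    }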

Another thing the engineer did was load frequently used strings into globals once instead of fetching them over and over again, while rarely used functionality, like ejecting a docked PC, is only loaded when needed. The process tree also saves resources by asking the kernel for the entire process table in a single call instead of querying processes one by one, which eliminates numerous API calls; if the supplied buffer is too small, Task Manager simply resizes it and tries again. Plummer also shared several other tips and tricks he used to ensure that Windows Task Manager did not take on more resources than necessary, allowing it to run smoothly on the limited computing power available at the time, even on systems that were already facing issues.
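
The "ask the kernel once, grow the buffer on mismatch" pattern described above looks roughly like the following sketch, built on the documented NtQuerySystemInformation call; it is an illustration of the technique, not Task Manager source.

    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef STATUS_INFO_LENGTH_MISMATCH
    #define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS)0xC0000004L)
    #endif

    #pragma comment(lib, "ntdll")

    int main(void)
    {
        ULONG size = 64 * 1024;   /* start with 64 KB and grow on demand */
        BYTE *buf = NULL;
        NTSTATUS status;

        /* One call returns the whole process table; if the buffer is too
           small, the kernel reports the size it needs and we retry. */
        do {
            BYTE *bigger = realloc(buf, size);
            if (!bigger) { free(buf); return 1; }
            buf = bigger;
            status = NtQuerySystemInformation(SystemProcessInformation,
                                              buf, size, &size);
        } while (status == STATUS_INFO_LENGTH_MISMATCH);

        if (status == 0) {  /* STATUS_SUCCESS */
            SYSTEM_PROCESS_INFORMATION *p = (SYSTEM_PROCESS_INFORMATION *)buf;
            for (;;) {
                wprintf(L"pid %5lu  %.*ls\n",
                        (unsigned long)(ULONG_PTR)p->UniqueProcessId,
                        (int)(p->ImageName.Length / sizeof(WCHAR)),
                        p->ImageName.Buffer ? p->ImageName.Buffer : L"");
                if (p->NextEntryOffset == 0) break;
                p = (SYSTEM_PROCESS_INFORMATION *)((BYTE *)p + p->NextEntryOffset);
            }
        }
        free(buf);
        return 0;
    }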

The processing and resource limitations of 90s computers forced Plummer to make the Windows Task Manager as lean as possible. “Task Manager came from a very different mindset. It came from a world where a page fault was something you felt, where low memory conditions had a weird smell, where if you made the wrong thing redraw too often, you could practically hear the guys in the offices moaning,” he said. “And while I absolutely do not want to go back to that old hardware, I do wish we had carried more of that taste. Not the suffering, the taste, the instinct to batch work, to cache the right things, to skip invisible work, to diff before repainting, to ask the kernel once instead of a hundred times, to load rare data rarely, to be suspicious of convenience when convenience sends a bill to the user.”


Original Submission

posted by janrinok on Tuesday April 14, @02:02PM   Printer-friendly

https://worldhistory.substack.com/p/tea-a-stimulant-that-made-the-modern

In the early modern period, the twin forces of global trade and colonialism introduced people around the world to foods, medicines, and diseases that had previously been confined to a certain region. One category of items seems to have been especially important: stimulants.

What fueled the feverish intellectual and commercial activity of the age? Certainly, the new availability of substances that provide an energy boost — from sugar to cocaine — played a role. For the next few weeks, we're going to be looking at the stimulants that made the modern world. First up — tea!

In the 1600s, an exciting new drug crossed the oceans in trade ships. It was exotic and rare, which only increased its allure. While under the influence, some people found that their minds raced, but others felt that it helped them to concentrate. It gave people unnatural amounts of energy and stamina. It did have side effects, though — some people got jittery, others felt their hearts race, and some couldn't sleep. Many people got hooked and felt like they couldn't function without the stuff. The London Gazette announced its arrival in 1658:

That Excellent, and by all Physitians approved, China Drink, called by Chineans, Tcha, by other Nations Tay, alias Tee, is sold at the Sultaness-head, a Cophee-house, in Sweeting's Rents by the Royal Exchange, London.

As you can see from the advertisement above, tea was not the first caffeine-delivery system to hit Europe in the early modern period. Coffee had shown up about a century before and provided a bigger hit of caffeine. But tea was something different, a beverage with subtler charms, a stimulant that somehow lent itself to soothing rituals. And no place was more charmed by tea than Britain.


Original Submission

posted by janrinok on Tuesday April 14, @09:12AM   Printer-friendly
from the computer-no-longer-inside dept.

Linux devs think even one second spent on 486 support is a second too many:

One point in favor of the sprawling Linux ecosystem is its broad hardware support—the kernel officially supports everything from '90s-era PC hardware to Arm-based Apple Silicon chips, thanks to decades of combined effort from hardware manufacturers and motivated community members.

But nothing can last forever, and for a few years now, Linux maintainers (including Linus Torvalds) have been pushing to drop kernel support for Intel's 80486 processor. This chip was originally introduced in 1989, was replaced by the first Intel Pentium in 1993, and was fully discontinued in 2007. Code commits suggest that Linux kernel version 7.1 will be the first to follow through, making it impossible to build a version of the kernel that will support the 486; Phoronix says that additional kernel changes to remove 486-related code will follow in subsequent kernel versions.

Although these chips haven't changed in decades, maintaining support for them in modern software isn't free.

"In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very, very few people are using with modern kernels," writes Linux kernel contributor Ingo Molnar in his initial patch removing 486 support from the kernel. "This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things."

[...] "I get the nostalgia, like classic cars, but a car you've spent a year's worth of weekends fixing up isn't a daily driver," writes user andyj. "Some of the extensions I maintain, like rsyslog and mariadb, require that the CPU be set to i586 as they will no longer compile for i486. The end is already here."

Those still using a 486 for one reason or another will still be able to run older Linux kernels and vintage operating systems—running old software without emulation or virtualization is one of the few reasons to keep booting up hardware this old. If you demand an actively maintained OS, you still have options, though—the FreeDOS project isn't Linux, but it does still run on PCs going all the way back to the original IBM Personal Computer and its 16-bit Intel 8088.


Original Submission

posted by janrinok on Tuesday April 14, @04:29AM   Printer-friendly

These are cheaper and faster to build compared to engines built using traditional methods:

Beehive Industries, a startup jet engine manufacturer based in Colorado, just secured a $30 million contract from the U.S. Air Force (USAF) to continue the research and development of small 3D-printed jet engines for uncrewed aircraft and stand-off weapons. According to the company, the USAF funding is allocated for vehicle integration, flight testing, and qualification of the Frenzy 8 — the company’s flagship engine that delivers 200lbs of thrust — as well as the possible flight demonstration of the smaller 100lb-thrust Frenzy 6. By comparison, the F-16 Viper is powered by either a GE F110 or Pratt & Whitney F100 engine, both of which develop thrust of over 29,000lbs.

3D printing, more accurately called additive manufacturing, has been used by the aviation industry for over 10 years now. In fact, GE, which makes the LEAP engine found in the Airbus A320neo in partnership with Safran, has been using this technique to manufacture jet engine parts since 2016. But despite the industry's use of 3D printing, not just anyone can (or should) start printing airplane parts at home. These require special materials and construction techniques, or you may cause an accident if you make a mistake.

However, it appears that Beehive will use 3D printing to build the engine from top to bottom. This would allow the company to manufacture all the parts that it needs to assemble a turbojet instead of relying on a specialized supply chain that could easily be disrupted. More importantly, it would reduce the time required to design, test, and deploy an engine, as well as minimize its production cost — an issue that the U.S. military is contending with, especially as it sometimes uses expensive missiles to take down cheap drones.

“By harnessing additive manufacturing to collapse complex supply chains into scalable, 3D-printed propulsion, we are providing the ‘affordable mass’ essential to modern deterrence,” said Beehive Industries Chief Product Officer Gordie Follin. “This collaboration ensures our warfighters will have the high-volume, mission-ready capabilities they need to maintain a competitive edge in any theater.” The company is competing against established giants like GE Aerospace, Pratt & Whitney, and Honeywell Aerospace for the small engine contract. Beehive might seem to be at a disadvantage, especially as these companies have established contracts with the Pentagon. However, all three have reported backlogs in various departments, meaning Beehive could probably deliver and maintain its engines much more quickly.

Other nations are building their own micro turbojet engines, too. A Chinese state-backed firm showed off a fully 3D-printed design in 2025, delivering over 350lbs of thrust at 13,000ft. Engines are among the most expensive components on an aircraft, accounting for roughly 25% to 40% of the cost. By turning to cheaper alternatives to traditional manufacturing, militaries can reduce the acquisition and maintenance costs of drones and missiles, getting more weapons for every dollar in the budget.


Original Submission

posted by janrinok on Monday April 13, @11:44PM   Printer-friendly
from the set-the-wild-echoes-flying dept.

A study reveals how individual tongue clicks and their echoes contribute to object sensing:

Navigating the world as a blind person sometimes involves using a cane, guide dog or wearable GPS system. For some, this toolkit includes echolocation. Producing tongue clicks and listening for echoes can be enough to gain information about nearby objects.

But even for expert echolocators, a single click is rarely enough to perceive an object. Echo after echo incrementally improves understanding, especially for expert echolocators, researchers report April 6 in eNeuro. The finding helps explain how the brain processes sound more generally.

[...] Many studies have shown that echolocation recruits visual areas of the brain and that performance improves significantly with practice. "What remained unexamined here was how this happens, how the information builds in real time, over individual echo signals," says cognitive neuroscientist Santani Teng at the Smith-Kettlewell Eye Research Institute in San Francisco.

[...] In line with previous research, expert echolocators were far better at determining the direction of an object than people who could see. One exceptional echolocator needed only to hear two sets of clicks and echoes to determine an object's direction. Unlike in previous studies, the team used the brain wave data to show that each click-echo pair added to the evidence the brain was accumulating to make the perceptual decision.
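
As a loose illustration of that accumulation idea (emphatically not the authors' model), a toy accumulate-to-bound simulation in C shows how noisy evidence from successive click-echo pairs can build toward a left/right decision; every parameter here is made up.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy evidence-accumulation demo: each click-echo pair contributes a noisy
       sample slightly favoring the true direction, and a decision is made once
       the running total crosses a bound. All values are arbitrary. */
    int main(void)
    {
        const double drift = 0.3;   /* mean evidence per click-echo pair */
        const double bound = 2.0;   /* decision threshold */
        double evidence = 0.0;
        int pairs = 0;

        srand(42);
        while (evidence < bound && evidence > -bound) {
            double noise = 2.0 * rand() / (double)RAND_MAX - 1.0;  /* ~[-1, 1] */
            evidence += drift + noise;
            pairs++;
            printf("pair %2d: accumulated evidence = %+.2f\n", pairs, evidence);
        }
        printf("decision: object to the %s after %d click-echo pairs\n",
               evidence >= bound ? "right" : "left", pairs);
        return 0;
    }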

"The study suggests that in human echolocation, spatial representations are constructed by progressively accumulating acoustic evidence over time, rather than through a single 'optimal snapshot,' " says neuroscientist Monica Gori at the Italian Institute of Technology in Genoa and the Institute for Human & Machine Cognition in Florida, who was not involved in the study.

[...] "Echolocators have a truly remarkable skill, with real-life benefits, but it is not magic," Teng says.

Journal Reference: Haydée G García-Lázaro and Santani Teng eNeuro 6 April 2026, ENEURO.0342-25.2026; https://doi.org/10.1523/ENEURO.0342-25.2026


Original Submission

posted by janrinok on Monday April 13, @06:54PM   Printer-friendly

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause "critical harm":

OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

[...] The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America's largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn't intentional and they published their reports.

[...] "At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.

[...] Years into the AI boom, there's still an open legal question around what happens if an AI model causes a catastrophic event.


Original Submission

posted by hubie on Monday April 13, @02:11PM   Printer-friendly

Intel's solution in its most aggressive setting provides a similar texture compression ratio to Nvidia's counterpart:

Intel is developing its own version of neural compression technology, which will reduce the footprint of video game textures in VRAM and/or storage, similar to Nvidia's NTC. Intel's solution can achieve a 9x compression ratio in its quality mode and an 18x compression ratio in its more aggressive setting. The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS. One will be tuned for its XMX engine while the other will be designed to run on traditional CPU and GPU cores at the expense of performance.

Intel is using BC1 texture compression and linear algebra for the XMX-accelerated portion of its neural texture compression technology. BC1 takes advantage of a "feature pyramid" that compresses four BC1 textures with MIP-chains. Compared to traditional compression, Intel's neural compression uses weights to compress textures with minimal loss to image quality. An encoder is responsible for encoding the textures, and a decoder is responsible for the decompression stage.

Intel noted four ways developers can deploy its texture compression, aimed at accelerating install times, saving disk space, or saving VRAM. The first is aimed at saving space on a server and reducing file size downloads by compressing textures beforehand, uploading those files to a server, then having the client download those textures and decompressing the textures on local storage.

The next three revolve around gameplay itself; one of these is streaming in textures as the game loads, the next involves streaming textures during gameplay, and the last one is loading textures on the fly without holding textures in VRAM (the latter is likely aimed at low VRAM GPUs).

Intel's compression tech has two modes of operation: a variant A mode that runs at higher quality and a variant B mode that sacrifices quality for higher compression. Intel claims variant A can take the first two 4096 x 4096, 64MB textures in a feature pyramid and compress them down to 10.7 MB each while retaining the 4K texture size. The remaining bottom two 4K by 4K pyramid textures are reduced to half their resolution and are compressed down to 2.7 MB.

With variant B, the textures are compressed more aggressively. The first texture in a feature pyramid is compressed down to 10.7 MB while retaining its resolution, the second texture is reduced down to half its normal resolution and compressed down to 2.7 MB, and the third texture's resolution is reduced to quarter resolution and compressed to 0.68 MB. The last texture's resolution is reduced to one-eighth of the texture's resolution and compressed down to 0.17 MB.

In Intel's own testing, it compared its BC1-based variant A and variant B texture compression against an industry-standard format using 3xBC1 plus 1xBC3. Variant A achieved over a 9x compression ratio and variant B an 18x compression ratio, while the industry-standard format managed only 4.8x.
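
Those headline ratios follow directly from the per-level sizes quoted above. The short C sketch below redoes the arithmetic for a four-level pyramid of 64 MB source textures, assuming the 2.7 MB figure applies to each of variant A's two half-resolution levels; the 4.8x baseline is the article's figure and is not recomputed here.

    #include <stdio.h>

    /* Recompute Intel's quoted compression ratios from the per-texture sizes
       given in the article: four 64 MB pyramid levels, compressed per variant. */
    int main(void)
    {
        const double original_mb  = 4 * 64.0;                    /* 256 MB total */
        const double variant_a_mb = 10.7 + 10.7 + 2.7 + 2.7;     /* 26.8 MB */
        const double variant_b_mb = 10.7 + 2.7 + 0.68 + 0.17;    /* ~14.3 MB */

        printf("variant A: %.1f MB -> %.1fx compression\n",
               variant_a_mb, original_mb / variant_a_mb);   /* about 9.6x */
        printf("variant B: %.1f MB -> %.1fx compression\n",
               variant_b_mb, original_mb / variant_b_mb);   /* about 18x  */
        return 0;
    }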

Intel's new texture compression tech is achieving almost the same compression ratios as Nvidia's own neural texture compression using variant B. It still remains to be seen whether Nvidia or Intel's solution provides better quality, but Intel is the only one of the three major Western GPU manufacturers to have a solution that works on graphics cards besides its own (for now).


Original Submission

posted by hubie on Monday April 13, @09:24AM   Printer-friendly

Pair backs scraper blocking and standards to separate trusted agents from bad bots

Citing the need to adapt to an internet increasingly serving the needs of AI agents without considering the needs of site owners, Cloudflare and GoDaddy are partnering on efforts to control how AIs crawl the web and interact with web content.

The content delivery network and the web host on Tuesday announced that they would help website owners gain better control over their relationship with AI, primarily through GoDaddy integrating Cloudflare's AI Crawl Control utility into its platform. That tool, as the pair explained in a press release, lets owners manage how AI interacts with their websites, allowing, blocking, or requiring payment from crawlers for access.

"By putting tools like AI Crawl Control and open standards into the hands of website owners, we are providing essential underpinnings for a new Internet business model," Cloudflare chief strategy officer Stephanie Cohen said of the move. "We want to ensure that every creator has the tools to verify who is interacting with their site, while giving legitimate AI agents a secure, transparent way to participate."

Cloudflare has been beating the drum over the need to control bots' access to websites and web content, and has rolled out several measures aimed at restricting unauthorized scraping in recent years. In 2025, it rolled out an AI that it said would trap and waste the time of unauthorized AI scrapers by feeding them endless garbage, and it has previously pushed to require AI bots to pay for access to websites. 

Charging bots was one of Cloudflare's ideas to help protect website operators, who the CDN has noted are losing tons of money earned via web traffic since many search visitors are now instead fed the answers they're looking for by an AI, like Google's AI Overviews. Therefore, visitors are often less likely to click through to the original source.

The pair have skin in this game, clearly, as without website owners making money, they're unlikely to get paid themselves.

If you set up roadblocks to stop bad bots, which the pair note is the point of this endeavor, then you're naturally going to end up catching a lot of good bots in the process, which is why Cloudflare and GoDaddy have also expressed support for new standards they believe will keep good bots in operation while restricting the reach of bad ones. 

The pair expressed support for the Agent Name Service (ANS), a proposal made last year that would act like a DNS system for AI agents, creating an open, protocol-agnostic registry of AI agents that would allow them to operate with a degree of trust and assurance by linking them to controllers, among other things. 

ANS was ultimately built by GoDaddy, we note, and is available on GitHub. The pair also threw their weight behind Cloudflare's Web Bot Auth method that relies on cryptographic signatures in HTTP messages to determine whether a request comes from an AI bot. The two technologies, the pair said, allow AI agents to identify themselves through cryptographically signed requests.
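
The core idea (the agent operator signs each request with a private key and the site verifies it against a registered public key) can be sketched with any detached-signature scheme. The C example below uses libsodium's Ed25519 routines purely as an illustration; it does not reproduce the actual Web Bot Auth header format.

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (sodium_init() < 0) return 1;

        /* The agent operator generates a key pair and publishes the public key
           (for example through a registry such as ANS); the private key signs
           each outgoing request. */
        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);

        /* Stand-in for the signed portion of an HTTP request. */
        const char *msg = "GET /article HTTP/1.1\nhost: example.com\nuser-agent: SomeBot/1.0";
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, NULL, (const unsigned char *)msg, strlen(msg), sk);

        /* The origin (or its CDN) verifies the signature with the published key. */
        int bad = crypto_sign_verify_detached(sig, (const unsigned char *)msg,
                                              strlen(msg), pk);
        printf("request signature %s\n", bad == 0 ? "verified" : "rejected");
        return 0;
    }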

"With an open ecosystem of standards and methods for identifying agents, the agentic web can evolve with transparency built in by default," the pair said. 

[...] That's not critical mass for AI agent standards to be considered … well, standard, but it's a start. It's also a helluva lot more likely to succeed than Sam Altman's eyeball-scan-for-an-AI-agent-license scheme, so there's that, too. 

Either way, something has to happen soon: MIT CSAIL's 2025 AI Agent Index, published in February, found that AI bots regularly ignore robots.txt restrictions, and few have released any safety data. Universally agreed upon rules are needed as they proliferate and change the shape of the internet. 


Original Submission

posted by hubie on Monday April 13, @04:35AM   Printer-friendly

[Ed. note: Little Snitch is a macOS program that intercepts network traffic at the kernel level to let you know what connections your applications are making behind the scenes]

Little Snitch for Linux — Because Nothing Else Came Close

https://obdev.at/blog/:

Recent political events have pushed governments and organizations to seriously question their dependence on foreign-controlled software. The core issue is simple and uncomfortable: through automatic updates, a vendor can run any code, with any privileges, on your machine, at any time. Most people know this, but prefer not to think about it. Linux is the obvious candidate for reducing that dependency: no single company controls it, no single country owns it. So I decided to explore it myself.

[...] Very soon after that, I felt kind of naked: being used to Little Snitch, it's a strange feeling to have no idea what connections your computer is making. I researched a bit, found OpenSnitch, several command line tools, and various security systems built for servers. None of these gave me what I wanted: see which process is making which connections, and in the best case deny with a single click.

[...] To make a long story short: I decided to use eBPF for traffic interception at kernel level. It's high performance and much more portable than kernel extensions. The main application code is in Rust, a language I've wanted to explore for quite a while. And the user interface was built as a web application. That last choice might seem odd for a privacy tool, but it means you can monitor a remote Linux server's network connections from any device, including your Mac. Want to know what Nextcloud, Home Assistant, or Zammad are actually connecting to? Use Little Snitch on the server.
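
For readers wondering what "eBPF for traffic interception" looks like at the smallest scale, here is a generic libbpf-style kprobe in C that logs which process opens outbound TCP connections. It is not Little Snitch's code (their kernel component is published separately), and a real tool would also read destination addresses, cover IPv6 and UDP, and ship events to user space instead of the trace pipe.

    // Minimal eBPF kernel program: compile with clang -target bpf, load with libbpf.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    /* Fires whenever a process enters tcp_v4_connect; logs its PID and
       command name to the kernel trace pipe. */
    SEC("kprobe/tcp_v4_connect")
    int BPF_KPROBE(log_tcp_connect)
    {
        char comm[16];
        __u32 pid = bpf_get_current_pid_tgid() >> 32;

        bpf_get_current_comm(&comm, sizeof(comm));
        bpf_printk("outbound connect: pid=%d comm=%s", pid, comm);
        return 0;
    }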

[...] The kernel component, written for eBPF, is open source and you can look at how it's implemented, fix bugs yourself, or adapt it to different kernel versions. The UI is also open source under GPL v2, feel free to make improvements. The backend, which manages rules, block lists, and the hierarchical connection view, is free to use but not open source. That part carries more than twenty years of Little Snitch experience, and the algorithms and concepts in it are something we'd like to keep closed for the time being.

One important note: unlike the macOS version, Little Snitch for Linux is not a security tool. eBPF provides limited resources, so it's always possible to get around the firewall for instance by flooding tables. Its focus is privacy: showing you what's going on, and where needed, blocking connections from legitimate software that isn't actively trying to evade it.

blog post: https://obdev.at/blog/
software: https://obdev.at/products/littlesnitch-linux/index.html

 

Little Snitch on Linux: A Proprietary "Solution" to a Solved Problem

We have better, more open ways to build our walls:

There is a bit of a stir in the Linux community this week. Little Snitch, the venerable gatekeeper of macOS network traffic, has finally made its way to our shores. On paper, it is an impressive bit of engineering. It utilises eBPF for high-performance kernel-level monitoring and is written in Rust, which is enough to make any technical enthusiast's ears perk up. It even sports a fancy web UI for those who prefer a mouse to a terminal.

But as I looked closer, the gloss started to peel. While parts of the project are open, the core logic, the "brain" that actually decides what to block and how to analyse your traffic, is closed source.

For a FOSS enthusiast, this is a total non-starter. We don't migrate to Linux just to swap one proprietary black box for another. If I cannot audit the code that sits between my binaries and the internet, I am not interested. A security tool that asks for blind trust is an oxymoron. In my home lab, if the code isn't transparent, the binary doesn't get executed. It is that simple.

As I've detailed before on this blog in The DNS Safety Net, my primary line of defence is AdGuard Home. By handling privacy at the DNS level, I have a silent, network-wide shield that catches the vast majority of telemetry, trackers, and "phone home" attempts before they even leave my Proxmox nodes.

[...] Even at the application level, I already have better alternatives in place. For this blog, I use Wordfence. It acts as a localised firewall, monitoring for malicious traffic and unauthorised changes right at the source. Between network-wide DNS filtering and application-specific security, the layers are already there. Adding a proprietary binary into that mix adds complexity without adding meaningful trust.

[...] If I ever needed to track down which specific application is making suspicious outbound connections, I would turn to OpenSnitch, the fully open-source, community-driven application firewall for Linux. It is not as polished as the new Little Snitch port, but every line of its code is open for inspection and it does not ask for blind trust.

My network is quiet, my logs are clean, and my gatekeeper is a piece of transparent software I host myself. Until a tool comes along that respects both my privacy and the FOSS ethos I live by, that is not going to change. If you are serious about your own data, you should keep your gatekeepers open and your network controlled at the edge.


Original Submission

posted by hubie on Sunday April 12, @11:48PM   Printer-friendly

Scientists have pushed the limits of mammal cloning until the whole house of cards has come tumbling down:

After two decades of continuous work, researchers in Japan have discovered a genetic 'dead end' to mammal cloning.

The study began in 2005, when researchers, led by scientists at the University of Yamanashi in Japan, cloned a single female mouse.

They then re-cloned that clone by transferring its nuclear DNA into an egg 'emptied' of nuclear DNA, and so on and so forth, for 57 more generations, producing more than 1,200 mice from that single original donor.

Two decades later, the team was on their 58th generation, and the re-cloned mice had accumulated so many genetic mutations that they died the day after they were born.

The study is the first peer-reviewed research to 'serially' clone a mammal to this end.

"It has long been unclear whether mammals, unlike plants and some lower animals, could sustain their species through clonal reproduction alone," write the research team, led by geneticist Sayaka Wakayama.

"[O]ur results align closely with Muller's ratchet theory," they add. "This model predicts that in asexual lineages, deleterious mutations inevitably accumulate, ultimately producing mutational meltdown and extinction."

Since the first mammal was cloned in the mid-1990s, famously called Dolly the Sheep, scientists have learned a great deal about the whole process, and how to recreate an animal using very few cells.

Some conservationists hope that the practice can one day help us bring back species from the brink of extinction, and a few celebrities have even started cloning their pets.

While this might work for a while, over time, as clones are re-cloned and then re-cloned again, dangerous mutations can accumulate in the genome. How long this takes to kill a creature is unknown, and scientists in Japan wanted to find out using mice.

For the team's first 25 cloning attempts, the re-cloned mice looked no different to the original genetic donor. In fact, success rates improved with each generation of clones, leading the authors to suspect "it may be possible to reclone animals indefinitely".

But then, something changed. The success rates of the cloned mice gradually declined before suddenly coming to an end.

It seemed that the mice had somehow lost their ability to efficiently eliminate chromosomal abnormalities and coding mutations.

Loss of the X chromosome became a prominent problem after the 25th generation of clones, and the frequency of deleterious mutations nearly doubled by the 57th generation.

Even those carrying mutations, however, lived normal lifespans – until generation 58, that is.

"Although serial cloning could not continue beyond the 58th generation (G58), the re-cloned mice remained healthy except G58, raising the possibility that subsequent generations could be produced via sexual reproduction," the authors suggest .

To test that idea, the team took female mice from the 20th, 50th, and 55th generations and mated them with normal male mice. The 20th-generation clones had similar litter sizes to control mice, but 50th- and 55th-generation clones had dramatically smaller litters.

Still, when those offspring were themselves bred with normal mice, producing grandchildren of the clones, litter sizes increased again to a healthy number.

The findings suggest that mammal species can be surprisingly tolerant of genetic mutations, remaining fit and able to reproduce even in the face of widespread genetic alterations.

The study, the authors say, reaffirms "the evolutionary inevitability that sexual reproduction is indispensable for the long-term survival of mammalian species".

Journal Reference: Wakayama, S., Ito, D., Inoue, R. et al. Limitations of serial cloning in mammals. Nat Commun 17, 2495 (2026). https://doi.org/10.1038/s41467-026-69765-7


Original Submission